Future-Proof AI
Inference IP

Scales from 1 to 864 TOPs
Serves All Markets - Includes Safety Enhanced Version for Automotive Designs
Runs Classic ML Models, LLMs, Transformers - Every Operator, Every Network!

SIMPLIFY YOUR SOC DESIGN

And Speed Up Porting of New ML Models

Quadric has the leading processor architecture optimized for on-device artificial intelligence computing. Only the Quadric Chimera GPNPU delivers high ML inference performance and also runs complex C++ code without forcing the developer to artificially partition code between two or three different kinds of processors.

Quadric’s Chimera GPNPU is a licensable processor that scales from 1 to 864 TOPs. Chimera GPNPUs run all ML networks - including classical backbones, vision transformers, and large language models.


Design Your SoC Faster with the Chimera GPNPU

One architecture for ML inference plus pre- and post-processing simplifies SoC hardware design and radically speeds up ML model porting and software application programming.

Three Reasons to Choose the Chimera GPNPU

1
Handles matrix and vector operations and scalar (control) code in one execution pipeline. No need to artificially partition application code (C++ code, ML graph code) between different kinds of processors.
2
Runs every kind of model - from classic backbones to vision transformers to Large Language Models (LLMs)
3
Up to 864 TOPs. A solution for every application segment - including ASIL-ready cores for Automotive applications
Find out more about the Chimera GPNPU

Chimera SDK and Quadric DevStudio 

Quadric’s SDK features world-class compilers, and DevStudio provides easy visualization of SoC design choices.
Learn more

Quadric Insights

Does Your NPU Do VLMs? It Better!

What’s a VLM? Vision Language Models are a rapidly emerging class of multimodal AI models that are becoming much more important in the automotive world, particularly in the ADAS sector. VLMs are built from a combination of […]

Read More
It’s Not All About the MACs: Why “Offload” Fails

A common approach in the industry to building an on-device machine learning inference accelerator has relied on the simple idea of building an array of high-performance multiply-accumulate circuits – a MAC accelerator. This accelerator was […]

Read More
Evaluating AI/ML Processors – Why Batch Size Matters

If you are comparing alternatives for an NPU selection, give special attention to clearly identifying how the NPU/GPNPU will be used in your target system, and make sure the NPU/GPNPU vendor is reporting the benchmarks […]

Read More
Explore more Quadric blogs

© Copyright 2024 Quadric. All Rights Reserved. Privacy Policy
